For an autonomous vehicle, the ability to sense its surroundings and to build an overall representation of the environment by fusing different sensor data streams is fundamental. To this end, the poses of all sensors need to be accurately determined. Traditional calibration methods are based on: 1) using targets specifically designed for calibration purposes in controlled environments, 2) optimizing a quality metric of the point clouds collected while traversing an unknown but static environment, or 3) optimizing the match among per-sensor incremental motion observations along a motion path fulfilling special requirements. In real scenarios, however, the online applicability of these methods can be limited, as such scenarios are typically highly dynamic, contain degenerate motion paths, and demand fast computation. In this paper, we propose an approach that tackles some of these challenges by formulating the calibration problem as a joint but structured optimization over all sensor calibrations, taking as input a summary of the point cloud information that consists of ground points and pole detections. We demonstrate the efficiency and the quality of the results of the proposed approach in a set of experiments with LiDAR simulation and real data from an urban trip.
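As an illustration only, the following Python snippet sketches how a joint calibration over a ground-point and pole-detection summary could be posed as a least-squares problem. This is a hypothetical minimal sketch, not the authors' pipeline: the function names, the planar (x, y, z, yaw) pose parameterization, and the toy data are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's method): recover a sensor-to-vehicle
# offset by requiring that ground points land on z = 0 in the vehicle frame and that
# pole detections match reference pole positions.
import numpy as np
from scipy.optimize import least_squares

def apply_pose(points, pose):
    """Apply a planar pose (x, y, z, yaw) to an Nx3 point array."""
    x, y, z, yaw = pose
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points @ R.T + np.array([x, y, z])

def residuals(pose, ground_sensor, poles_sensor, poles_vehicle):
    """Stack ground-height residuals and pole-matching residuals."""
    ground_res = apply_pose(ground_sensor, pose)[:, 2]                      # ground should sit at z = 0
    pole_res = (apply_pose(poles_sensor, pose)[:, :2] - poles_vehicle[:, :2]).ravel()
    return np.concatenate([ground_res, pole_res])

rng = np.random.default_rng(0)
true_pose = np.array([0.5, -0.2, 1.8, 0.05])          # hypothetical ground-truth offset
ground_sensor = np.column_stack([rng.uniform(-10, 10, (200, 2)),
                                 np.full(200, -true_pose[2])])   # flat ground seen by the sensor
poles_sensor = np.column_stack([rng.uniform(-10, 10, (5, 2)), np.zeros(5)])
poles_vehicle = apply_pose(poles_sensor, true_pose)    # same poles expressed in the vehicle frame

fit = least_squares(residuals, x0=np.zeros(4),
                    args=(ground_sensor, poles_sensor, poles_vehicle))
print("estimated calibration:", fit.x)                 # should be close to true_pose
```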
Artificial neural networks can learn complex, salient data features to achieve a given task. On the opposite end of the spectrum, mathematically grounded methods such as topological data analysis allow users to design analysis pipelines fully aware of data constraints and symmetries. We introduce a class of persistence-based neural network layers. Persistence-based layers allow users to easily inject knowledge about the symmetries (equivariance) respected by the data, are equipped with learnable weights, and can be composed with state-of-the-art neural architectures.
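To make the idea of a persistence-based layer with learnable weights concrete, here is a minimal sketch under our own assumptions (it is not the layer family proposed in the paper): a learnable, permutation-invariant vectorization of a precomputed persistence diagram that can be composed with standard PyTorch modules.

```python
# Sketch (illustrative assumption): map a persistence diagram, given as an (N, 2) tensor
# of (birth, death) pairs, to a fixed-size feature vector via learnable Gaussian bumps.
# Summing over diagram points makes the output invariant to their ordering.
import torch
import torch.nn as nn

class PersistenceVectorizer(nn.Module):
    def __init__(self, n_features: int = 16):
        super().__init__()
        self.centers = nn.Parameter(torch.rand(n_features, 2))   # learnable bump centers
        self.log_scales = nn.Parameter(torch.zeros(n_features))  # learnable bump widths

    def forward(self, diagram: torch.Tensor) -> torch.Tensor:
        # Squared distance of every diagram point to every learnable center: (N, F)
        d2 = ((diagram[:, None, :] - self.centers[None, :, :]) ** 2).sum(-1)
        bumps = torch.exp(-d2 * torch.exp(self.log_scales))
        return bumps.sum(dim=0)                                   # permutation-invariant pooling

# Toy usage: three persistence pairs feeding a small classifier head.
layer = PersistenceVectorizer(n_features=8)
diagram = torch.tensor([[0.0, 0.9], [0.1, 0.4], [0.2, 0.3]])
features = layer(diagram)            # shape (8,), differentiable w.r.t. the layer's weights
logits = nn.Linear(8, 2)(features)
```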
We consider the problem of modelling high-dimensional distributions and generating new examples of data with a complex relational feature structure coherent with a graph skeleton. The model we propose tackles the problem of generating data features constrained by the specific graph structure of each data point by splitting the task into two phases. In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it complements these with edge features, conditioned on the node features. We follow the strategy of implicit distribution modelling via a generative adversarial network (GAN) combined with a permutation-equivariant message passing architecture operating over the sets of nodes and edges. This enables generating the feature vectors of all the graph objects in one go (in two phases), as opposed to the much slower one-by-one generation of sequential models; it avoids the expensive graph matching procedures usually needed for likelihood-based generative models; and it uses the network capacity efficiently by being insensitive to the particular node ordering in the graph representation. To the best of our knowledge, this is the first method that models the feature distribution along the graph skeleton, allowing for the generation of annotated graphs with user-specified structures. Our experiments demonstrate the ability of our model to learn complex structured distributions through quantitative evaluation over three annotated graph datasets.
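A minimal sketch of the two-phase idea, under our own assumptions about the architecture (the class name, layer sizes, and symmetrization below are illustrative; this is not the paper's GAN generator): node features are produced first with permutation-equivariant message passing over the fixed skeleton, and edge features are then produced conditioned on the endpoint node features.

```python
# Sketch (assumed architecture): generate node features over a fixed adjacency, then
# edge features conditioned on the node features, all in one forward pass.
import torch
import torch.nn as nn

class TwoPhaseGenerator(nn.Module):
    def __init__(self, noise_dim=8, node_dim=4, edge_dim=2, hidden=32):
        super().__init__()
        self.node_msg = nn.Sequential(nn.Linear(noise_dim, hidden), nn.ReLU())
        self.node_out = nn.Sequential(nn.Linear(noise_dim + hidden, hidden), nn.ReLU(),
                                      nn.Linear(hidden, node_dim))
        self.edge_out = nn.Sequential(nn.Linear(2 * node_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, edge_dim))

    def forward(self, adj: torch.Tensor, z: torch.Tensor):
        # Phase 1: node features. Sum-aggregation over neighbours keeps the computation
        # equivariant to any relabelling of the nodes.
        msgs = adj @ self.node_msg(z)                                   # (N, hidden)
        node_feats = self.node_out(torch.cat([z, msgs], dim=-1))        # (N, node_dim)
        # Phase 2: edge features conditioned on symmetrized endpoint features.
        n = node_feats.shape[0]
        pairs = torch.cat([node_feats[:, None, :].expand(n, n, -1),
                           node_feats[None, :, :].expand(n, n, -1)], dim=-1)
        edge_feats = self.edge_out(pairs + pairs.transpose(0, 1))       # symmetric in (i, j)
        return node_feats, edge_feats

# Toy usage on a 4-node skeleton: all node and edge feature vectors in one go.
adj = torch.tensor([[0., 1., 0., 1.], [1., 0., 1., 0.],
                    [0., 1., 0., 1.], [1., 0., 1., 0.]])
nodes, edges = TwoPhaseGenerator()(adj, torch.randn(4, 8))
```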
In this paper, we establish a bridge between topological data analysis and geometric deep learning, adapting the topological theory of group equivariant non-expansive operators (GENEOs) to act on the space of all graphs weighted on vertices or edges. This is done by showing how the general concept of a GENEO can be used to transform graphs and to provide information about their structure. This requires introducing the new notions of generalized permutant and generalized permutant measure, together with mathematical proofs that these concepts allow us to build GENEOs between graphs. An experimental section concludes the paper, illustrating the possible use of our operators to extract information from graphs. This paper is part of a line of research devoted to developing a compositional and geometric theory of GENEOs for geometric deep learning.
RNN-T models have gained popularity in the literature and in commercial systems thanks to their competitiveness and their ability to operate in online streaming mode. In this work, we conduct an extensive study comparing several prediction network architectures for both monotonic and original RNN-T models. We compare four types of prediction networks on top of a common state-of-the-art Conformer encoder and report results on LibriSpeech and an internal medical conversation dataset. Our study covers both the offline batch mode and the online streaming scenario. In contrast to some previous works, our results show that Transformers do not always outperform LSTMs when used as the prediction network alongside a Conformer encoder. Motivated by these scores, we propose a new simple prediction network architecture, N-Concat, which outperforms the others on our online streaming benchmark. Transformer and n-gram-reduced architectures perform very similarly, yet exhibit some important differences in behavior with respect to prior context. Overall, we obtained up to 4.1% relative improvement over the LSTM baseline while reducing the prediction network parameters by almost an order of magnitude (8.4x).
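As a hedged illustration of the N-Concat idea described above (our reading of the abstract; the class name, dimensions, and padding convention are assumptions, not the authors' code), a prediction network of this kind can simply embed the last N previous labels, concatenate their embeddings, and project them:

```python
# Sketch (assumed design): an n-gram-style "concat" prediction network for RNN-T.
import torch
import torch.nn as nn

class NConcatPredictionNetwork(nn.Module):
    def __init__(self, vocab_size: int, embed_dim: int = 64, context_size: int = 2,
                 out_dim: int = 256):
        super().__init__()
        self.context_size = context_size
        self.embed = nn.Embedding(vocab_size + 1, embed_dim, padding_idx=0)  # index 0 = blank/pad
        self.proj = nn.Linear(context_size * embed_dim, out_dim)

    def forward(self, labels: torch.Tensor) -> torch.Tensor:
        """labels: (batch, U) previous non-blank labels; returns (batch, U, out_dim)."""
        batch, _ = labels.shape
        # Left-pad so that each position sees exactly the last `context_size` labels.
        padded = torch.cat([labels.new_zeros(batch, self.context_size - 1), labels], dim=1)
        windows = padded.unfold(dimension=1, size=self.context_size, step=1)  # (batch, U, N)
        emb = self.embed(windows)                            # (batch, U, N, embed_dim)
        return self.proj(emb.flatten(start_dim=2))            # concatenate over the N context labels

pred_net = NConcatPredictionNetwork(vocab_size=1000)
out = pred_net(torch.randint(1, 1001, (3, 7)))               # shape (3, 7, 256)
```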
The SIGMORPHON 2022 shared task on morpheme segmentation challenged systems to decompose a word into a sequence of morphemes and covered most types of morphology: compounds, derivations, and inflections. Subtask 1, word-level morpheme segmentation, covered 5 million words across 9 languages (Czech, English, Spanish, Hungarian, French, Italian, Russian, Latin, Mongolian) and received 13 system submissions from 7 teams; the best system averaged a 97.29% F1 score across all languages, ranging from English (93.84%) to Latin (99.38%). Subtask 2, sentence-level morpheme segmentation, covered 18,735 sentences in 3 languages (Czech, English, Mongolian) and received 10 system submissions from 3 teams; the best system outperformed all three state-of-the-art subword tokenization methods (BPE, ULM, Morfessor2) by an absolute 30.71%. To facilitate error analysis and to support future research of any kind, we release all system predictions, the evaluation scripts, and all gold-standard datasets.
Language models demonstrate both quantitative improvements and new qualitative capabilities as their scale increases. Despite their potentially transformative impact, these new capabilities are still poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing on linguistics, childhood development, math, commonsense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to billions of parameters. In addition, a team of human expert raters performed all tasks in order to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though sparsity brings some benefits; tasks that improve gradually and predictably tend to involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
One of the most discussed issues in graph generative modelling is the ordering of the representation. One solution consists of using equivariant generative functions, which ensure invariance to the ordering. After discussing some properties of such functions, we propose 3G-GAN, a 3-stage model relying on GANs and equivariant functions. The model is still under development. However, we present some encouraging exploratory experiments and discuss the issues that remain to be addressed.
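To illustrate the ordering-invariance property that motivates equivariant generative functions (a generic sanity check, not code from the 3G-GAN paper), the following snippet verifies that a sum-aggregation message-passing step is permutation equivariant: relabelling the input nodes relabels the output in exactly the same way.

```python
# Sanity check (illustration only): equivariance of a sum-aggregation message-passing step.
import torch

def message_pass(adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
    """Each node receives the sum of its neighbours' features."""
    return adj @ x

adj = torch.tensor([[0., 1., 1.], [1., 0., 0.], [1., 0., 0.]])
x = torch.randn(3, 4)
perm = torch.tensor([2, 0, 1])
P = torch.eye(3)[perm]                           # permutation matrix for the relabelling

lhs = message_pass(P @ adj @ P.T, P @ x)         # permute the graph first, then propagate
rhs = P @ message_pass(adj, x)                   # propagate first, then permute the output
assert torch.allclose(lhs, rhs)                  # equivariance: the two orders agree
```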